
    Asymptotic analysis of a multiclass queueing control problem under heavy-traffic with model uncertainty

    We study a multiclass M/M/1 queueing control problem with finite buffers under heavy traffic, where the decision maker is uncertain about the arrival and service rates of the system and acts, through scheduling and admission/rejection decisions, to minimize a discounted cost that accounts for the uncertainty. The main result is the asymptotic optimality of a $c\mu$-type policy derived via underlying stochastic differential games studied in [16]. Under this policy, with high probability, rejections are not performed while the workload lies below some cut-off that depends on the ambiguity level. When the workload exceeds this cut-off, rejections are carried out, and only from the buffer with the cheapest rejection cost weighted by the mean service rate in some reference model. The allocation part of the policy is the same for all ambiguity levels. This is the first work to address a heavy-traffic queueing control problem with model uncertainty.
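The allocation part of such a policy follows a $c\mu$-type priority index: serve the nonempty class that maximizes holding cost times service rate. A minimal sketch, with hypothetical rates and costs (not values from the paper):

```python
def c_mu_priority(queues, costs, mus):
    """Return the index of the class to serve under the c-mu rule,
    or None if every queue is empty."""
    candidates = [i for i, q in enumerate(queues) if q > 0]
    if not candidates:
        return None
    # Highest (holding cost) x (service rate) index wins
    return max(candidates, key=lambda i: costs[i] * mus[i])

queues = [3, 0, 5]        # jobs waiting in each class (illustrative)
costs  = [2.0, 5.0, 1.0]  # holding cost per job per unit time
mus    = [1.0, 2.0, 4.0]  # service rates
print(c_mu_priority(queues, costs, mus))  # 2  (index 1.0*4.0 beats 2.0*1.0)
```

Class 1 has the largest index ($5.0 \times 2.0$) but is empty, so the rule falls back to the best nonempty class.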

    Parameter Estimation: The Proper Way to Use Bayesian Posterior Processes with Brownian Noise

    This paper studies a problem of Bayesian parameter estimation for a sequence of scaled counting processes whose weak limit is a Brownian motion with an unknown drift. The main result of the paper is that the limit of the posterior distribution processes is, in general, not equal to the posterior distribution process of the mentioned Brownian motion with the unknown drift. Instead, it equals the posterior distribution process associated with a Brownian motion with the same unknown drift and a different standard deviation coefficient. The difference between the two standard deviation coefficients can be arbitrarily large. The characterization of the limit of the posterior distribution processes is then applied to a family of stopping time problems. We show that the proper way to find asymptotically optimal solutions to stopping time problems with respect to the scaled counting processes is to look at the limit of the posterior distribution processes, rather than the naive approach of looking at the limit of the scaled counting processes themselves. The difference between the performances can be arbitrarily large.
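The sensitivity to the diffusion coefficient can be seen in the standard conjugate computation: for $X_t = \theta t + \sigma W_t$ with a Gaussian prior on the drift $\theta$, the posterior given $X_t = x$ depends on $\sigma$. A minimal sketch with an assumed $N(m_0, v_0)$ prior and illustrative numbers (not the paper's counting-process model):

```python
def drift_posterior(x, t, sigma, m0=0.0, v0=1.0):
    """Posterior mean and variance of the unknown drift theta of
    X_t = theta*t + sigma*W_t, given a N(m0, v0) prior and X_t = x."""
    precision = 1.0 / v0 + t / sigma**2
    mean = (m0 / v0 + x / sigma**2) / precision
    return mean, 1.0 / precision

m1, v1 = drift_posterior(x=2.0, t=4.0, sigma=1.0)
m2, v2 = drift_posterior(x=2.0, t=4.0, sigma=2.0)
print(m1, v1)  # 0.4 0.2
print(m2, v2)  # 0.25 0.5 -- a larger sigma gives a flatter, shifted posterior
```

The two posteriors differ in both mean and variance, which mirrors the abstract's point: using the wrong standard deviation coefficient in the limit yields the wrong posterior process.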

    On singular control problems, the time-stretching method, and the weak-M1 topology

    We consider a general class of singular control problems with state constraints. Budhiraja and Ross (2006) established the existence of optimal controls for a relaxed version of this class of problems by using the so-called `time-stretching' method and the J1 topology. We show that the weak-M1 topology is better suited for establishing existence, since it allows one to bypass the time transformations without any additional effort. Furthermore, we reveal how the time-scaling feature in the definition of the weak-M1 distance embeds the time-stretching method's scheme. This case study suggests that one can benefit from working with the weak-M1 topology in other singular control frameworks, such as queueing control problems under heavy traffic.

    A differential game for a multiclass queueing model in the moderate-deviation heavy-traffic regime

    We study a differential game that governs the moderate-deviation heavy-traffic asymptotics of a multiclass single-server queueing control problem with a risk-sensitive cost. We consider a cost set on a finite but sufficiently large time horizon, and show that this formulation leads to stationary feedback policies for the game. Several aspects of the game are explored, including its characterization via a (one-dimensional) free boundary problem, the semi-explicit solution of an optimal strategy, and the specification of a saddle point. We emphasize the analogy to the well-known Harrison-Taksar free boundary problem, which plays a similar role in the diffusion-scale heavy-traffic literature.

    Universal Anomaly Detection: Algorithms and Applications

    Modern computer threats are far more complicated than those seen in the past. They are constantly evolving, altering their appearance, and perpetually changing disguise. Under such circumstances, detecting known threats, a fortiori zero-day attacks, requires new tools that are able to capture the essence of their behavior rather than some fixed signatures. In this work, we propose novel universal anomaly detection algorithms, which are able to learn the normal behavior of systems and alert on abnormalities, without any prior knowledge of the system model or of the characteristics of the attack. The suggested method utilizes the Lempel-Ziv universal compression algorithm in order to optimally assign probabilities to normal behavior (during learning), and then estimates the likelihood of new data (during operation) and classifies it accordingly. The suggested technique is generic and can be applied to different scenarios. Indeed, we apply it to key problems in computer security. The first is detecting botnet Command and Control (C&C) channels. A botnet is a logical network of compromised machines that are remotely controlled by an attacker using a C&C infrastructure in order to perform malicious activities. We derive a detection algorithm based on timing data, which can be collected without deep inspection, from open as well as encrypted flows. We evaluate the algorithm on real-world network traces, showing how a universal, low-complexity C&C identification system can be built, with high detection rates and low false-alarm probabilities. Further applications include malicious-tool detection via system-call monitoring and data leakage identification.
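The core primitive, a Lempel-Ziv based description-length assignment, can be sketched roughly as follows. This is a simplified LZ78 codelength estimate of our own, not the authors' implementation: sequences that parse into many distinct phrases compress poorly, so their per-symbol codelength (the "anomaly score") is higher.

```python
from math import log2

def lz78_codelength(seq):
    """Approximate LZ78 description length of seq, in bits:
    c phrases cost roughly c*(log2(c) + 1) bits."""
    dictionary, phrase, c = set(), "", 0
    for sym in seq:
        phrase += sym
        if phrase not in dictionary:
            dictionary.add(phrase)   # a new phrase ends here
            c += 1
            phrase = ""
    if phrase:                       # unfinished final phrase
        c += 1
    return c * (log2(c) + 1) if c else 0.0

def anomaly_score(seq):
    """Bits per symbol: higher means the data compresses worse,
    i.e., deviates more from low-complexity 'normal' behavior."""
    return lz78_codelength(seq) / max(len(seq), 1)

# A trivially regular stream scores lower than a more varied one
print(anomaly_score("a" * 200) < anomaly_score("abcdefghij" * 20))  # True
```

In an actual detector, the dictionary would be trained on normal traffic and new data scored against it; the one-shot version above only conveys the compressibility idea.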

    Secure Group Testing

    The principal goal of Group Testing (GT) is to identify a small subset of "defective" items from a large population by grouping items into as few test pools as possible. The test outcome of a pool is positive if it contains at least one defective item, and is negative otherwise. GT algorithms are utilized in numerous applications, and in many of them maintaining the privacy of the tested items, namely, keeping secret whether they are defective or not, is critical. In this paper, we consider a scenario where there is an eavesdropper (Eve) who is able to observe a subset of the GT outcomes (pools). We propose a new non-adaptive Secure Group Testing (SGT) scheme based on information-theoretic principles. The newly proposed test design keeps the eavesdropper ignorant regarding the items' status. Specifically, when the fraction of tests observed by Eve is $0 \leq \delta < 1$, we prove that with the naive Maximum Likelihood (ML) decoding algorithm, the number of tests required for both correct reconstruction at the legitimate user (with high probability) and negligible information leakage to Eve is $\frac{1}{1-\delta}$ times the number of tests required with no secrecy constraint, in the fixed-$K$ regime. By a matching converse, we completely characterize the Secure GT capacity. Moreover, we consider the Definitely Non-Defective (DND) computationally efficient decoding algorithm, proposed in the literature for non-secure GT. We prove that with the new secure test design, for $\delta < 1/2$, the number of tests required, without any constraint on $K$, is at most $\frac{1}{1/2-\delta}$ times the number of tests required with no secrecy constraint.
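The DND decoder mentioned above admits a short sketch (the pool design here is a toy example, not the paper's secure construction): any item appearing in a negative pool is definitely non-defective, and everything never cleared this way is declared defective.

```python
def dnd_decode(test_matrix, outcomes):
    """Definitely Non-Defective decoding.
    test_matrix[t] = set of item indices placed in pool t;
    outcomes[t] = 1 if pool t tested positive, 0 otherwise."""
    n = max(i for pool in test_matrix for i in pool) + 1
    cleared = set()
    for pool, out in zip(test_matrix, outcomes):
        if out == 0:              # negative pool: every member is clean
            cleared |= pool
    return sorted(set(range(n)) - cleared)

pools = [{0, 1, 2}, {2, 3, 4}, {0, 3}, {1, 4}]
# Suppose item 2 is the only defective: exactly the pools containing it are positive
outcomes = [1, 1, 0, 0]
print(dnd_decode(pools, outcomes))  # [2]
```

With too few pools the decoder may over-report defectives (items that happen never to fall in a negative pool), which is why the test count matters.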

    Bandit problems with Lévy processes

    Bandit problems model the trade-off between exploration and exploitation in various decision problems. We study two-armed bandit problems in continuous time, where the risky arm can have two types, High or Low; both types yield stochastic payoffs generated by a Lévy process. We show that the optimal strategy is a cut-off strategy, and we provide an explicit expression for the cut-off and for the optimal payoff.
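As an illustrative discrete-time analogue of a cut-off strategy (a Bernoulli-payoff toy model of our own, not the paper's Lévy setting): pull the risky arm while the posterior probability of the High type stays above a threshold, and retire to the safe arm once the belief drops to the cut-off.

```python
import random

def run_cutoff_policy(p0, p_star, p_high, p_low, horizon, is_high, seed=0):
    """Simulate a cut-off policy: pull while belief > p_star.
    p_high / p_low are the success probabilities of the two arm types."""
    rng = random.Random(seed)
    belief, pulls = p0, 0
    for _ in range(horizon):
        if belief <= p_star:          # cut-off reached: stop pulling
            break
        pulls += 1
        success = rng.random() < (p_high if is_high else p_low)
        # Bayes update of P(type = High) after observing the payoff
        like_h = p_high if success else 1 - p_high
        like_l = p_low if success else 1 - p_low
        belief = belief * like_h / (belief * like_h + (1 - belief) * like_l)
    return belief, pulls

belief, pulls = run_cutoff_policy(p0=0.5, p_star=0.2, p_high=0.7,
                                  p_low=0.3, horizon=100, is_high=False)
print(pulls, round(belief, 3))  # a Low arm is typically abandoned early
```

The continuous-time Lévy version replaces the Bernoulli likelihood update with a belief diffusion, but the structure of the policy, a single cut-off in belief space, is the same.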

    Serve the shortest queue and Walsh Brownian motion

    We study a single-server Markovian queueing model with $N$ customer classes in which priority is given to the shortest queue. Under a critical load condition, we establish the diffusion limit of the workload and queue length processes in the form of a Walsh Brownian motion (WBM) living in the union of the $N$ nonnegative coordinate axes in $\mathbb{R}^N$ and a linear transformation thereof. This reveals the following asymptotic behavior. Each time that queues begin to build starting from an empty system, one of them becomes dominant in the sense that it contains nearly all the workload in the system, and it remains so until the system becomes (nearly) empty again. The radial part of the WBM, given as a reflected Brownian motion (RBM) on the half-line, captures the total workload asymptotics, whereas its angular distribution expresses how likely it is for each class to become dominant on excursions. As a heavy traffic result it is nonstandard in three ways: (i) In the terminology of Harrison (1995) it is unconventional, in that the limit is not an RBM. (ii) It does not constitute an invariance principle, in that the limit law (specifically, the angular distribution) is not determined solely by the first two moments of the data, and is sensitive even to tie breaking rules. (iii) The proof method does not fully characterize the limit law (specifically, it gives no information on the angular distribution).
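The service discipline in the first sentence can be stated in a few lines. The tie-breaking by class index below is an arbitrary choice of ours; as the abstract notes, the limit law is sensitive even to such rules.

```python
def shortest_queue_class(queues):
    """Among nonempty classes, return the one with the shortest queue
    (ties broken by smallest index); None if the system is empty."""
    nonempty = [i for i, q in enumerate(queues) if q > 0]
    if not nonempty:
        return None
    return min(nonempty, key=lambda i: (queues[i], i))

print(shortest_queue_class([4, 2, 0, 7]))  # 1  (shortest nonempty queue)
print(shortest_queue_class([0, 0, 0]))     # None
```

Note the counterintuitive effect described in the abstract: by always serving the shortest queue, the server lets one long queue dominate the workload for the whole excursion.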

    Risk Sensitive Control of the Lifetime Ruin Problem

    We study a risk-sensitive control version of the lifetime ruin probability problem. We consider a sequence of investment problems in a Black-Scholes market that includes a risky asset and a riskless asset. We present a differential game that governs the limit behavior, solve it explicitly, and use it to find an asymptotically optimal policy. (Final version; to appear in Applied Mathematics and Optimization. Keywords: probability of lifetime ruin, optimal investment, risk-sensitive control, large deviations, differential game.)

    Secure Adaptive Group Testing

    \emph{Group Testing} (GT) addresses the problem of identifying a small subset of defective items from a large population by grouping items into as few test pools as possible. In \emph{Adaptive GT} (AGT), outcomes of previous tests can influence the makeup of future tests. Using an information-theoretic point of view, Aldridge (2012) showed that in the regime of a few defectives, adaptivity does not help much, as the number of tests required is essentially the same as for non-adaptive GT. \emph{Secure GT} considers a scenario where there is an eavesdropper who may observe a fraction $\delta$ of the test results, yet should not be able to infer the status of the items. In the non-adaptive scenario, the number of tests required is $1/(1-\delta)$ times the number of tests without the secrecy constraint. In this paper, we consider \emph{Secure Adaptive GT}; specifically, during the makeup of the pools, one has access to a private feedback link from the lab of rate $R_f$. We prove that the number of tests required for both correct reconstruction at the legitimate lab (with high probability) and negligible mutual information at the eavesdropper is $1/\min\{1, 1-\delta+R_f\}$ times the number of tests required with no secrecy constraint. Thus, unlike non-secure GT, where an adaptive algorithm has only a mild impact, under a security constraint adaptivity can significantly boost performance. A key insight is that not only should the adaptive link disregard the actual test results and simply send keys, but these keys should also be enhanced through a "secret sharing" scheme before usage. We derive sufficiency and necessity bounds that completely characterize the Secure Adaptive GT capacity.
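The "secret sharing" step can be illustrated with the simplest XOR scheme (a toy sketch under our own assumptions, not the paper's construction): a key bit is split into $n$ shares such that all $n$ shares together recover the bit, while any $n-1$ of them are uniformly random and reveal nothing.

```python
import secrets

def xor_share(bit, n):
    """Split a key bit into n XOR-shares: n-1 uniformly random bits
    plus one correction share, so the XOR of all n equals the bit."""
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    last = bit
    for s in shares:
        last ^= s
    return shares + [last]

def xor_reconstruct(shares):
    """XOR all shares together to recover the original bit."""
    out = 0
    for s in shares:
        out ^= s
    return out

shares = xor_share(1, 5)
print(xor_reconstruct(shares))  # 1
```

An eavesdropper seeing at most 4 of the 5 shares sees only uniform bits; this is the same "missing one share tells you nothing" property that the enhanced feedback keys exploit.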